Neuro-evolution Methods for Designing Emergent Specialization
Abstract
This research applies the Collective Specialization NeuroEvolution (CONE) method to the problem of evolving neural controllers in a simulated multi-robot system. The multi-robot system consists of multiple pursuer (predator) robots and a single evader (prey) robot. The CONE method is designed to facilitate behavioral specialization in order to increase task performance in collective behavior solutions. Pursuit-evasion is a task that benefits from behavioral specialization. The performance of prey-capture strategies derived by the CONE method is compared to that of strategies derived by the Enforced Sub-Populations (ESP) method. Results indicate that the CONE method effectively facilitates behavioral specialization in the team of pursuer robots, and that this specialization aids the derivation of robust prey-capture strategies. Comparatively, ESP was found to be less suited to facilitating behavioral specialization and effective prey-capture behaviors.
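As a minimal sketch of the pursuit-evasion setup described above (not the authors' simulator; the sensor layout, dynamics, and all names are illustrative assumptions), the snippet below drives each pursuer with its own small neural controller and scores a team for closing in on, and capturing, the single evader.

```python
import numpy as np

class NeuralController:
    """Minimal feed-forward controller mapping sensor inputs to a 2-D velocity."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def act(self, sensors):
        return np.tanh(np.tanh(sensors @ self.w1) @ self.w2)

def evaluate_team(controllers, evader_policy, steps=500, capture_dist=0.1, seed=0):
    """Run one pursuit episode and return a scalar team fitness."""
    rng = np.random.default_rng(seed)
    pursuers = rng.uniform(-1.0, 1.0, (len(controllers), 2))  # pursuer x, y positions
    evader = rng.uniform(-1.0, 1.0, 2)                        # single prey position
    dists = np.linalg.norm(pursuers - evader, axis=1)
    for t in range(steps):
        for i, ctrl in enumerate(controllers):
            sensors = np.concatenate([evader - pursuers[i],               # offset to prey
                                      (pursuers - pursuers[i]).ravel()])  # teammate offsets
            pursuers[i] = pursuers[i] + 0.02 * ctrl.act(sensors)
        evader = evader + 0.02 * evader_policy(evader, pursuers)
        dists = np.linalg.norm(pursuers - evader, axis=1)
        if dists.min() < capture_dist:
            return 1.0 + (steps - t) / steps      # earlier captures score higher
    return 1.0 / (1.0 + dists.min())              # no capture: reward proximity
```

With three pursuers, each controller would take 8 inputs (the prey offset plus one 2-D offset per teammate, including its own zero offset) and output a 2-D velocity; an evader_policy that simply flees the nearest pursuer is enough to exercise the loop.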
Related Articles
Collective neuro-evolution for evolving specialized sensor resolutions in a multi-rover task
This article presents results from an evaluation of the collective neuro-evolution (CONE) controller design method. CONE solves collective behavior tasks and increases task performance by facilitating emergent behavioral specialization. Emergent specialization is guided by genotype and behavioral specialization difference metrics that regulate genotype recombination. CONE is comparatively tes...
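As a hedged illustration of the recombination-gating idea mentioned above, the sketch below shows how genotype and behavioral specialization difference metrics could decide whether two controller genotypes are allowed to recombine. The metric definitions and thresholds are assumptions for illustration, not CONE's exact formulation.

```python
import numpy as np

def genotype_difference(g1, g2):
    """Mean per-weight distance between two genotypes (flat weight vectors)."""
    return float(np.mean(np.abs(np.asarray(g1) - np.asarray(g2))))

def specialization_difference(s1, s2):
    """Distance between behavioral specialization signatures, e.g. the fraction
    of episode time each controller spends on each behavior category."""
    return float(np.linalg.norm(np.asarray(s1) - np.asarray(s2)))

def may_recombine(g1, s1, g2, s2, max_gd=0.5, max_sd=0.2):
    """Allow crossover only between genotypes that are similar both genetically
    and in the specialization their behavior exhibits."""
    return (genotype_difference(g1, g2) <= max_gd and
            specialization_difference(s1, s2) <= max_sd)
```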
Neuro-Evolution Methods for Gathering and Collective Construction
This paper evaluates the Collective Neuro-Evolution (CONE) method, comparative to a related controller design method, in a simulated multi-robot system. CONE solves collective behavior tasks and increases task performance by facilitating behavioral specialization. Emergent specialization is guided by genotype and behavioral specialization difference metrics that regulate genotype recombination...
On Feasibility of Adaptive Level Hardware Evolution for Emergent Fault Tolerant Communication
A permanent physical fault in communication lines usually leads to a failure. The feasibility of evolving a self-organized communication scheme is studied in this paper to address this problem. In this case a communication protocol may emerge between blocks and can also adapt itself to environmental changes such as physical faults and defects. In spite of faults, blocks may continue to function since ...
Efficient Reinforcement Learning through Symbiotic Evolution
This article presents a novel reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and prevents convergence to suboptimal solutions. I...
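As a rough sketch of the symbiotic evaluation scheme the SANE abstract describes, the snippet below assembles networks from randomly sampled hidden neurons and credits each participating neuron with the fitness of the networks it joined. The build_network and task_fitness callbacks are hypothetical, and all parameter values are illustrative.

```python
import random

def evaluate_sane_generation(neuron_population, build_network, task_fitness,
                             neurons_per_net=8, trials=200, rng=random):
    """Assign each neuron the average fitness of the networks it participated in."""
    scores = {id(n): 0.0 for n in neuron_population}
    counts = {id(n): 0 for n in neuron_population}
    for _ in range(trials):
        team = rng.sample(neuron_population, neurons_per_net)  # hidden neurons for one net
        fitness = task_fitness(build_network(team))             # evaluate the assembled net
        for neuron in team:                                     # credit every participant
            scores[id(neuron)] += fitness
            counts[id(neuron)] += 1
    return {id(n): scores[id(n)] / max(counts[id(n)], 1) for n in neuron_population}
```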